Optimal convergence of on-line backpropagation

Authors

  • Marco Gori
  • Marco Maggini
Abstract

Many researchers are quite skeptical about the actual behavior of neural network learning algorithms like backpropagation. One of the major problems is the lack of clear theoretical results on optimal convergence, particularly for pattern-mode algorithms. In this paper, we prove the companion, for feedforward networks, of Rosenblatt's (1960) PC (perceptron convergence) theorem, stating that pattern-mode backpropagation converges to an optimal solution for linearly separable patterns.
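
As an illustration of the setting the theorem addresses, the sketch below runs pattern-mode (on-line) backpropagation, i.e., a weight update after every single pattern, on a linearly separable two-class problem. The network size, sigmoid activations, squared-error loss, and learning rate are illustrative assumptions, not the construction used in the paper.

```python
# A minimal sketch of pattern-mode (on-line) backpropagation on a linearly
# separable two-class problem.  Network shape, sigmoid activation, squared-error
# loss, and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable data: the label is the sign of a fixed linear functional.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
t = (X @ np.array([2.0, -1.0]) > 0.25).astype(float)      # targets in {0, 1}

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One hidden layer with 3 units (sizes are arbitrary for the sketch).
W1 = rng.normal(scale=0.5, size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)
eta = 0.5                                                  # fixed learning rate

for epoch in range(200):
    for x, target in zip(X, t):          # pattern mode: update after each example
        h = sigmoid(x @ W1 + b1)         # forward pass
        y = sigmoid(h @ W2 + b2)[0]
        # backward pass for the squared error 0.5 * (y - target)**2
        delta_out = (y - target) * y * (1.0 - y)
        delta_hid = (W2[:, 0] * delta_out) * h * (1.0 - h)
        W2 -= eta * np.outer(h, [delta_out]); b2 -= eta * delta_out
        W1 -= eta * np.outer(x, delta_hid);   b1 -= eta * delta_hid

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)[:, 0] > 0.5
print("training accuracy:", (pred == t.astype(bool)).mean())
```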

Similar articles

Learning Rate Schedules for Faster Stochastic Gradient Search

Stochastic gradient descent is a general algorithm that includes LMS, on-line backpropagation, and adaptive k-means clustering as special cases. The standard choices of the learning rate (both adaptive and fixed functions of time) often perform quite poorly. In contrast, our recently proposed class of "search then converge" (STC) learning rate schedules (Darken and Moody, 1990b, 1991) display th...
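
As a rough illustration of the idea, the sketch below plugs one commonly quoted form of a search-then-converge schedule, eta(t) = eta0 / (1 + t/tau) (roughly constant for t << tau, decaying like 1/t afterwards), into plain stochastic gradient descent on an LMS-style least-squares problem. The constants and the toy problem are assumptions, not values from the cited work.

```python
# Sketch of a "search then converge" learning rate schedule driving SGD on a
# simple LMS-style least-squares problem.  eta0, tau, and the toy problem are
# illustrative choices.
import numpy as np

def stc_rate(t, eta0=0.1, tau=100.0):
    # ~eta0 for t << tau ("search" phase), ~eta0 * tau / t for t >> tau ("converge" phase)
    return eta0 / (1.0 + t / tau)

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)

for t in range(5000):
    x = rng.normal(size=3)
    y = x @ w_true + 0.1 * rng.normal()   # noisy linear target
    grad = (w @ x - y) * x                # gradient of 0.5 * (w.x - y)^2
    w -= stc_rate(t) * grad

print("estimate:", w, " true:", w_true)
```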

Towards Faster Stochastic Gradient Search

Stochastic gradient descent is a general algorithm which includes LMS, on-line backpropagation, and adaptive k-means clustering as special cases. The standard choices of the learning rate [1] (both adaptive and fixed functions of time) often perform quite poorly. In contrast, our recently proposed class of "search then converge" learning rate schedules (Darken and Moody, 1990) display the theore...

Improving the Convergence of the Backpropagation Algorithm Using Learning Rate Adaptation Methods

This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradi...
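For concreteness, the sketch below shows a generic Armijo backtracking line search used to choose the learning rate for batch gradient descent on a stand-in quadratic error function; the constants and the error function are illustrative and do not reproduce the article's Lipschitz-based adaptation.

```python
# Sketch of an Armijo (backtracking) line search for the learning rate in batch
# gradient descent.  The error function is a stand-in quadratic; sigma and beta
# are conventional illustrative choices.
import numpy as np

def armijo_step(f, grad, w, eta0=1.0, beta=0.5, sigma=1e-4, max_halvings=30):
    """Return eta satisfying f(w - eta*g) <= f(w) - sigma * eta * ||g||^2."""
    g = grad(w)
    fw = f(w)
    eta = eta0
    for _ in range(max_halvings):
        if f(w - eta * g) <= fw - sigma * eta * (g @ g):
            return eta
        eta *= beta                 # shrink until sufficient decrease holds
    return eta

# Toy error function: E(w) = 0.5 * w^T A w with an ill-conditioned A.
A = np.diag([1.0, 50.0])
f = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

w = np.array([1.0, 1.0])
for _ in range(50):
    w -= armijo_step(f, grad, w) * grad(w)
print("final error:", f(w))
```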

Adaptive Back-Propagation in On-Line Learning of Multilayer Networks

An adaptive back-propagation algorithm is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, both numerical studies and a rigorous analysis show that the adaptive back-propagation method results in faster training by breaking the symmetry bet...

Training Feed-forward Neural Networks Using the Gradient Descent Method with the Optimal Stepsize

The most widely used algorithm for training multilayer feedforward networks, Error BackPropagation (EBP), is an iterative gradient descent algorithm by nature. A variable stepsize is the key to fast convergence of BP networks. A new optimal stepsize algorithm is proposed for accelerating the training process. It modifies the objective function to reduce the computational complexity of the Jacobi...
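As a generic illustration of optimal-stepsize gradient descent (not the specific algorithm proposed in this paper), the sketch below minimizes a toy quadratic error surface using, at every iteration, the stepsize that exactly minimizes a local quadratic model along the negative gradient, with the Hessian-vector product approximated by a finite difference of gradients.

```python
# Sketch of stepsize selection by exact line minimization under a local
# quadratic model: eta* = (g.g) / (g.H.g), with H.g approximated by a
# finite difference of gradients.  Generic illustration only.
import numpy as np

def optimal_stepsize(grad, w, eps=1e-6):
    g = grad(w)
    Hg = (grad(w + eps * g) - g) / eps        # finite-difference Hessian-vector product
    denom = g @ Hg
    return (g @ g) / denom if denom > 0 else 1e-3   # tiny fallback step if curvature <= 0

# Toy quadratic error surface standing in for a network error function.
A = np.diag([1.0, 10.0, 100.0])
grad = lambda w: A @ w

w = np.array([1.0, 1.0, 1.0])
for _ in range(30):
    w -= optimal_stepsize(grad, w) * grad(w)
print("remaining error:", 0.5 * w @ A @ w)
```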

Journal:
  • IEEE Transactions on Neural Networks

Volume 7, Issue 1

Pages: -

Published: 1996